Memory processes underlying long-term semantic priming
Authors
Abstract
Similar Resources
A Growing Long-term Episodic & Semantic Memory
The long-term memory of most connectionist systems lies entirely in the weights of the system. Since the number of weights is typically fixed, this bounds the total amount of knowledge that can be learned and stored. Though this is not normally a problem for a neural network designed for a specific task, such a bound is undesirable for a system that continually learns over an open range of doma...
Long-term semantic priming: a computational account and empirical evidence.
Semantic priming is traditionally viewed as an effect that rapidly decays. A new view of long-term word priming in attractor neural networks is proposed. The model predicts long-term semantic priming under certain conditions. That is, the task must engage semantic-level processing to a sufficient degree. The predictions were confirmed in computer simulations and in 3 experiments. Experiment 1 s...
Long-term memory underlying hippocampus-dependent social recognition in mice.
The ability to learn and remember individuals is critical for the stability of social groups. Social recognition reflects the ability of mice to identify and remember conspecifics. Social recognition is assessed as a decrease in spontaneous investigation behaviors observed in a mouse reexposed to a familiar conspecific. Our results demonstrate that group-housed mice show social memory for a fam...
Uncovering underlying processes of semantic priming by correlating item-level effects.
The current study examines the underlying processes of semantic priming using the largest priming database available (i.e., Semantic Priming Project, Hutchison et al. Behavior Research Methods, 45(4), 1099-1114, 2013). Specifically, it compares priming effects in two tasks: lexical decision and pronunciation. Task similarities were assessed at two different stimulus onset asynchronies (SOAs) (i...
DAG-Structured Long Short-Term Memory for Semantic Compositionality
Recurrent neural networks, particularly long short-term memory (LSTM), have recently been shown to be very effective in a wide range of sequence modeling problems, core to which is effective learning of distributed representations for subsequences as well as the sequences they form. An assumption in almost all the previous models, however, posits that the learned representation (e.g., a distributed r...
Journal
Journal title: Memory & Cognition
سال: 2018
ISSN: 0090-502X, 1532-5946
DOI: 10.3758/s13421-018-0867-8